29 research outputs found
Nouveaux médias et orthographe. Incompétence ou pluricompétence ?
The present study investigates the hypothesis of a pluri-competence enabling new information and communication technology users to switch between traditional writing and computer-mediated written communication in the same way that they change from one register to another. We collected written productions from young people (aged 14-15) across two media (electronic/paper) and three communication situations (dictation, class activity, Facebook) in order to study the influence of these variables on the students' spelling. The results obtained from the dictations show that the students' level is relatively low (one mistake every 5 or 6 words), with a majority of grammatical mistakes, which is in line with previous studies on the subject. The analysis of linguistic units common to the three corpora indicates that every participant uses standard spelling in at least one of the corpora (if not several). The same type of analysis conducted on the Facebook corpus alone shows that the teenagers master standard spelling in most cases (88% of the forms). Finally, we observe that the range of spelling variants used in the Facebook conversations is fairly limited (mainly abbreviations, smileys and echo characters) and that the compression ratio of the forms is rather low, indicating that most forms are either written in full or shortened by a single character.
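The compression ratio mentioned in the abstract can be made concrete. As a minimal sketch (the function name, the threshold-free formula and the example pairs are ours, not the authors'), a form-level compression ratio compares the length of an observed variant with the length of its standard counterpart:

```python
def compression_ratio(variant: str, standard: str) -> float:
    """Proportion of characters removed from the standard form.

    0.0 means the variant is written in full; values close to 1.0
    indicate heavy shortening (e.g. SMS-style abbreviation).
    """
    if not standard:
        raise ValueError("standard form must be non-empty")
    saved = max(len(standard) - len(variant), 0)
    return saved / len(standard)

# A few Facebook-style variants paired with their standard forms.
pairs = [("bcp", "beaucoup"), ("koi", "quoi"), ("demain", "demain")]
for variant, standard in pairs:
    print(variant, standard, round(compression_ratio(variant, standard), 2))
```

A corpus-level figure would then simply average this ratio over all (variant, standard) pairs; a low average is what the study reports for the Facebook conversations.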
Automation of dictation exercises. A working combination of CALL and NLP.
This article falls within the general framework of Computer-Assisted Language Learning (CALL) and addresses more specifically the automation of dictation exercises. It presents a method for correcting learners' copies. Based on Natural Language Processing (NLP) tools, this method is original in two respects. First, it exploits the composition of finite-state machines to both detect and delimit the errors. Second, it uses an automatic morphosyntactic analysis of the original dictation, which makes it easier to produce superficial and in-depth linguistic feedback. The system has been evaluated on a corpus of 115 copies containing 1,532 errors. The accuracy of the error detection is 99%; the superficial feedback is 97.2% correct, the in-depth feedback 96%, and the morphosyntactic analysis 87.7%.
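The finite-state composition itself is too involved for a short example, but the core task it serves (detecting and delimiting where a learner's copy diverges from the dictated original) can be sketched with a standard sequence alignment. This is a deliberate simplification of the paper's method, not its actual implementation:

```python
from difflib import SequenceMatcher

def delimit_errors(original: str, copy: str):
    """Align the learner's copy against the dictated original and
    return the diverging spans as (original_tokens, copy_tokens) pairs."""
    ref, hyp = original.split(), copy.split()
    matcher = SequenceMatcher(a=ref, b=hyp, autojunk=False)
    errors = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # replace / delete / insert = candidate error span
            errors.append((ref[i1:i2], hyp[j1:j2]))
    return errors

print(delimit_errors("les enfants jouent dans le jardin",
                     "les enfant joue dans le jardun"))
```

Each delimited span could then be passed to a morphosyntactic analysis of the original to classify the error (grammatical vs. lexical), which is the role the paper assigns to its second component.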
Le TAL au service de l'ALAO/ELAO. L'exemple des exercices de dictée automatisés.
This paper comes within the general framework of Computer-Assisted Language Learning and addresses more specifically the automation of dictation exercises. It presents a method for correcting learners' copies that is original in two ways. First, the method exploits the composition of finite-state automata to both detect and analyze the errors. Second, it relies on an automatic morphosyntactic analysis of the original dictation, which makes it easier to produce diagnoses.
PLATON, un outil de dictée automatique au service de l'apprentissage de l'orthographe
PLATON is an online platform (http://cental.uclouvain.be/platon) within the general framework of computer-assisted language learning (CALL). It focuses on spelling, and its central exercise is the school dictation. PLATON aims to go beyond the traditional dictation exercise by offering learners a formative evaluation and diagnostic tool that helps them identify their weaknesses. During the 2012/2013 school year, the platform was tested in real conditions by learners and teachers from different levels (primary, secondary and university), who then assessed its educational value and user-friendliness. In this paper, we first present the pedagogical considerations that led to the development of PLATON. We then present the main functionalities of the service. Finally, we analyse the results of the user evaluation, on the basis of which we consider the modifications needed for a larger-scale deployment of the platform across different teaching levels.
Variations prosodiques en synthèse par sélection d'unités : l'exemple des phrases interrogatives
This paper proposes an automatic method to increase the number of possible prosodic variations in non-uniform unit selection speech synthesis. More specifically, we are interested in the production of interrogative sentences with the eLite text-to-speech system, which relies on the selection of non-uniform units but does not have the units required to produce questions in its speech database. The purpose of this work was to make the system able to synthesize interrogative sentences without recording a new, interrogative database. After a description of the syntactic and prosodic phenomena involved in the production of interrogative sentences, we present our two-step method: an adapted pre-processing of the targets searched for in the database, and a post-processing of the speech signal once it has been generated. A perceptual evaluation of sentences synthesized with our approach is then described, which points out both the pros and cons of the method and highlights some issues in the very principles of the eLite system.
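The post-processing step can be illustrated at the level of the F0 contour: an interrogative reading of a declarative-style recording typically calls for a rise at the end of the utterance. The sketch below is a toy illustration of that idea only (the function, the 30% window and the 1.4 peak factor are our assumptions, not eLite's actual signal processing, which operates on the speech waveform itself):

```python
def apply_final_rise(f0, rise_fraction=0.3, peak_factor=1.4):
    """Impose a linear F0 rise over the last `rise_fraction` of a contour:
    statement-like contours end low, question-like contours end high."""
    n = len(f0)
    start = int(n * (1 - rise_fraction))
    span = max(n - start - 1, 1)
    out = list(f0)
    for k, i in enumerate(range(start, n)):
        factor = 1 + (peak_factor - 1) * (k / span)
        out[i] = f0[i] * factor
    return out

contour = [120.0] * 10        # flat 120 Hz contour, one value per frame
print(apply_final_rise(contour))
```

In a real system the modified contour would then drive a pitch-modification algorithm (e.g. PSOLA-style resynthesis) rather than being printed.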
CEFR-based Short Answer Grading
The project through which the corpus was collected is concerned with the task of automatically assessing the written proficiency level of non-native (L2) learners of English. Drawing on previous research on automated L2 writing assessment based on the Common European Framework of Reference for Languages (CEFR), we investigate the possibilities and difficulties of deriving the CEFR level from short answers to open-ended questions, a task that has received little attention to date. Our study has a twofold objective: to examine the difficulties involved in both human and automated CEFR-based grading of short answers. First, we compiled a learner corpus of short answers graded with CEFR levels by three certified Cambridge examiners. Next, we used the corpus to develop a soft-voting system for the automated CEFR-based grading of short answers.
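Soft voting, as named in the abstract, averages per-class probability estimates from several classifiers and picks the class with the highest mean. A minimal stdlib sketch (the CEFR labels and the probability values below are invented for illustration; the abstract does not specify the underlying classifiers):

```python
def soft_vote(prob_dicts):
    """Average class probabilities from several classifiers and
    return (winning label, mean probability per label)."""
    labels = prob_dicts[0].keys()
    mean = {lab: sum(p[lab] for p in prob_dicts) / len(prob_dicts)
            for lab in labels}
    return max(mean, key=mean.get), mean

# Hypothetical per-classifier probabilities over three CEFR levels.
clf_outputs = [
    {"A2": 0.2, "B1": 0.5, "B2": 0.3},
    {"A2": 0.1, "B1": 0.6, "B2": 0.3},
    {"A2": 0.3, "B1": 0.3, "B2": 0.4},
]
label, mean = soft_vote(clf_outputs)
print(label)  # the level with the highest averaged probability
```

Unlike hard (majority) voting, this keeps each classifier's confidence: a grader that is only mildly in favour of a level contributes less than one that is strongly in favour.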
Prominence perception and accent detection in French. A corpus-based account
The goal of this paper is to shed new light on accentuation in French, more precisely to discuss the role of the grammatical constraints and phonetic factors involved in the perception of French final and non-final accent. The study is based on the analysis of a 70-minute corpus covering various speaking styles. The corpus has been annotated manually and automatically for prominence detection, and tagged semi-automatically for grammatical categories. We first describe the rate of accentuation for each grammatical category (discussing the notion of "clitic" in French) and then discuss the divergences between the manual and automatic prominence detection, in relation to the phonological structure.
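The per-category accentuation rate reported in the study is, in essence, the ratio of prominent tokens to all tokens of each grammatical category. A minimal sketch on toy data (the POS tags and prominence flags below are invented for illustration, not drawn from the 70-minute corpus):

```python
from collections import defaultdict

def accentuation_rates(tokens):
    """tokens: iterable of (word, pos_tag, is_prominent) triples.
    Returns {pos_tag: proportion of prominent tokens in that category}."""
    total = defaultdict(int)
    prominent = defaultdict(int)
    for _word, tag, is_prom in tokens:
        total[tag] += 1
        prominent[tag] += int(is_prom)
    return {tag: prominent[tag] / total[tag] for tag in total}

toy = [("le", "DET", False), ("chat", "NOUN", True),
       ("dort", "VERB", True), ("sur", "PREP", False),
       ("le", "DET", False), ("tapis", "NOUN", True)]
print(accentuation_rates(toy))
```

Low rates for determiners and prepositions versus high rates for content words is exactly the kind of pattern that feeds the paper's discussion of clitics in French.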